Zhipu AI Unveils GLM-4.7-Flash for Local Coding
Discover GLM-4.7-Flash, Zhipu AI's innovative model for efficient local coding tasks.
Glyph converts ultra-long text into page images processed by a VLM, achieving 3–4× effective token compression and roughly 4× faster prefill and decoding on 128K inputs (a minimal sketch of the rendering idea appears below).
GLM-4.6 expands context to 200K tokens, reduces token consumption on real-world coding benchmarks, and offers open weights for local inference and research.
Zhipu AI's ComputerRL combines programmatic APIs with GUI actions and a scalable RL infrastructure to build more capable desktop agents. Experimental results show strong gains on the OSWorld benchmark, driven by the API-GUI paradigm and the Entropulse training method.
Zhipu AI released GLM-4.5V, an open-source vision-language model that combines a 106B-parameter MoE backbone with 12B active parameters, a 64K-token context, and a tunable Thinking Mode for advanced multimodal reasoning.
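The Glyph item above describes rendering ultra-long text into page images so that a vision-language model consumes pixels instead of text tokens. The snippet below is a minimal, hypothetical sketch of just that rendering step using Pillow; the page size, font, wrapping parameters, and function name are illustrative assumptions and do not reflect the released Glyph pipeline or its model interface.

```python
# Hypothetical sketch: paint long text onto fixed-size "page" images so a VLM
# can read pixels instead of text tokens. Parameters are illustrative only.
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_pages(text: str, page_size=(1024, 1024),
                 chars_per_line=110, lines_per_page=60) -> list[Image.Image]:
    """Wrap the text into lines and draw them onto white page images."""
    font = ImageFont.load_default()
    lines = textwrap.wrap(text, width=chars_per_line)
    pages = []
    for start in range(0, len(lines), lines_per_page):
        page = Image.new("RGB", page_size, "white")
        draw = ImageDraw.Draw(page)
        y = 8
        for line in lines[start:start + lines_per_page]:
            draw.text((8, y), line, fill="black", font=font)
            y += 16  # fixed line height for the default bitmap font
        pages.append(page)
    return pages

# Each rendered page stands in for many text tokens; a VLM would then be
# prompted with these images plus a short text query.
pages = render_pages("some ultra-long document " * 2000)
print(f"{len(pages)} page images rendered")
```

In this framing, the compression comes from the fact that one image patch can cover several characters, which is how a page image ends up cheaper than the equivalent stream of text tokens.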